PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning
This paper presents a method for adding multiple tasks to a single deep
neural network while avoiding catastrophic forgetting. Inspired by network
pruning techniques, we exploit redundancies in large deep networks to free up
parameters that can then be employed to learn new tasks. By performing
iterative pruning and network re-training, we are able to sequentially "pack"
multiple tasks into a single network while ensuring minimal drop in performance
and minimal storage overhead. Unlike prior work that uses proxy losses to
maintain accuracy on older tasks, we always optimize for the task at hand. We
perform extensive experiments on a variety of network architectures and
large-scale datasets, and observe much better robustness against catastrophic
forgetting than prior work. In particular, we are able to add three
fine-grained classification tasks to a single ImageNet-trained VGG-16 network
and achieve accuracies close to those of separately trained networks for each
task. Code available at https://github.com/arunmallya/packnet
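As a rough illustration of the iterative-pruning bookkeeping described above, here is a minimal Python sketch (ours, not the released code): after training a task, the lowest-magnitude free weights are released for future tasks and the survivors are frozen. The owner mask, function name, and pruning fraction are illustrative assumptions.

```python
import torch

def packnet_prune(weight, owner, task_id, prune_fraction=0.5):
    """Freeze surviving weights for the current task and free the rest.

    owner[i] == t means weight i is frozen for task t; owner[i] == 0 means free.
    """
    free = owner == 0                          # weights the current task could train
    vals = weight[free].abs()
    k = int(vals.numel() * prune_fraction)     # how many weights to release
    if k > 0:
        threshold = vals.kthvalue(k).values    # k-th smallest magnitude among free weights
        keep = free & (weight.abs() > threshold)
    else:
        keep = free
    owner = owner.clone()
    owner[keep] = task_id                      # freeze survivors for this task
    weight = weight * (owner > 0).to(weight.dtype)  # zero released weights for re-training
    return weight, owner

w = torch.randn(1000)
own = torch.zeros(1000, dtype=torch.long)
w, own = packnet_prune(w, own, task_id=1)      # ~50% of weights now freed for task 2
```

After pruning, the current task would be briefly re-trained with only its surviving weights, and the zeroed weights become the free capacity for the next task.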
Unsupervised Network Pretraining via Encoding Human Design
Over the years, computer vision researchers have spent an immense amount of
effort on designing image features for the visual object recognition task. We
propose to incorporate this valuable experience to guide the task of training
deep neural networks. Our idea is to pretrain the network through the task of
replicating the process of hand-designed feature extraction. By learning to
replicate the process, the neural network integrates previous research
knowledge and learns to model visual objects in a way similar to the
hand-designed features. In the succeeding finetuning step, it further learns
object-specific representations from labeled data and this boosts its
classification power. We pretrain two convolutional neural networks where one
replicates the process of histogram of oriented gradients feature extraction,
and the other replicates the process of region covariance feature extraction.
After finetuning, we achieve substantially better performance than the baseline
methods.
Comment: 9 pages, 11 figures, WACV 2016: IEEE Winter Conference on Applications of Computer Vision
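To make the idea concrete, here is a toy sketch (our assumptions throughout: 64x64 grayscale inputs, a small placeholder CNN, skimage's HOG implementation) of pretraining a network to regress the hand-designed descriptor before finetuning on labels:

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog

# Placeholder CNN; output size 1764 matches HOG of a 64x64 image with
# 9 orientations, 8x8 cells, and 2x2 blocks: 7*7 blocks * 2*2*9 = 1764.
net = nn.Sequential(
    nn.Conv2d(1, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(64 * 16, 1764),
)

def hog_target(img: np.ndarray) -> torch.Tensor:
    # The hand-designed descriptor the network learns to replicate.
    d = hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return torch.from_numpy(d.astype(np.float32))

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
imgs = torch.rand(8, 1, 64, 64)                # stand-in for unlabeled images
targets = torch.stack([hog_target(i.squeeze(0).numpy()) for i in imgs])
loss = nn.functional.mse_loss(net(imgs), targets)
opt.zero_grad(); loss.backward(); opt.step()   # one pretraining step
```

After this regression pretraining, the convolutional layers would be kept and the head replaced for supervised finetuning on labeled data.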
Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights
This work presents a method for adapting a single, fixed deep neural network
to multiple tasks without affecting performance on already learned tasks. By
building upon ideas from network quantization and pruning, we learn binary
masks that piggyback on an existing network, or are applied to unmodified
weights of that network to provide good performance on a new task. These masks
are learned in an end-to-end differentiable fashion, and incur a low overhead
of 1 bit per network parameter, per task. Even though the underlying network is
fixed, the ability to mask individual weights allows for the learning of a
large number of filters. We show performance comparable to dedicated fine-tuned
networks for a variety of classification tasks, including those with large
domain shifts from the initial task (ImageNet), and a variety of network
architectures. Unlike prior work, we do not suffer from catastrophic forgetting
or competition between tasks, and our performance is agnostic to task ordering.
Code available at https://github.com/arunmallya/piggyback
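The masking mechanism can be sketched as follows (our reading of the abstract, not the released code): a real-valued mask is thresholded to a binary mask in the forward pass, and a straight-through estimator passes gradients back to the real values while the backbone weights stay frozen. The threshold and initialization below are assumptions.

```python
import torch
import torch.nn as nn

class Binarize(torch.autograd.Function):
    THRESHOLD = 5e-3  # assumed cutoff for turning a real value into a 1-bit mask

    @staticmethod
    def forward(ctx, real_mask):
        return (real_mask > Binarize.THRESHOLD).float()  # 1 bit per weight at deploy time

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out                                  # straight-through estimator

class MaskedLinear(nn.Module):
    """Frozen backbone weights; only the per-task mask is learned."""
    def __init__(self, weight, bias):
        super().__init__()
        self.register_buffer("weight", weight)           # backbone stays fixed
        self.register_buffer("bias", bias)
        self.real_mask = nn.Parameter(0.01 * torch.ones_like(weight))

    def forward(self, x):
        mask = Binarize.apply(self.real_mask)
        return nn.functional.linear(x, self.weight * mask, self.bias)

layer = MaskedLinear(torch.randn(10, 20), torch.zeros(10))
out = layer(torch.randn(4, 20))                          # gradients reach real_mask only
```

At deployment, only the binary mask needs to be stored per task, which is where the 1-bit-per-parameter overhead comes from.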
Solving Visual Madlibs with Multiple Cues
This paper focuses on answering fill-in-the-blank style multiple choice
questions from the Visual Madlibs dataset. Previous approaches to Visual
Question Answering (VQA) have mainly used generic image features from networks
trained on the ImageNet dataset, despite the wide scope of questions. In
contrast, our approach employs features derived from networks trained for
specialized tasks of scene classification, person activity prediction, and
person and object attribute prediction. We also present a method for selecting
sub-regions of an image that are relevant for evaluating the appropriateness of
a putative answer. Visual features are computed both from the whole image and
from local regions, while sentences are mapped to a common space using a simple
normalized canonical correlation analysis (CCA) model. Our results show a
significant improvement over the previous state of the art, and indicate that
answering different question types benefits from examining a variety of image
cues and carefully choosing informative image sub-regions.
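A toy sketch of the CCA scoring step (random stand-ins for the image and sentence features; dimensions, sample counts, and helper names are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
img_feats = rng.normal(size=(300, 128))    # stand-in for specialized CNN features
sent_feats = rng.normal(size=(300, 64))    # stand-in for sentence embeddings

cca = CCA(n_components=16, max_iter=500)
cca.fit(img_feats, sent_feats)

def score_answers(img_f, cand_sents):
    """Rank candidate answer sentences for one image by cosine similarity
    in the shared CCA space (a simple stand-in for normalized CCA scoring)."""
    u = cca.transform(img_f.reshape(1, -1))
    _, v = cca.transform(img_feats[:1], cand_sents)   # Y-side projection; X arg is a dummy
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return (v @ u.T).ravel()

best = int(np.argmax(score_answers(img_feats[0], sent_feats[:4])))
```

In the paper's setting the same scoring would be repeated with features from the whole image and from selected sub-regions, with the different cues combined to pick the answer.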
Implicit Warping for Animation with Image Sets
We present a new implicit warping framework for image animation that transfers
the motion of a driving video to a set of source images. A single
cross-modal attention layer is used to find correspondences between the source
images and the driving image, choose the most appropriate features from
different source images, and warp the selected features. This is in contrast to
the existing methods that use explicit flow-based warping, which is designed
for animation using a single source and does not extend well to multiple
sources. The pick-and-choose capability of our framework helps it achieve
state-of-the-art results on multiple datasets for image animation using both
single and multiple source images. The project website is available at
https://deepimagination.cc/implicit_warping/
Comment: To be published at NeurIPS 2022
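As a simplified sketch of the mechanism described above (our reading, not the authors' implementation): driving-frame queries attend over keys pooled from all source images at once, which is what gives the framework its pick-and-choose capability. Learned query/key/value projections and multi-head details are omitted, and the keys are reused as values for brevity.

```python
import torch

def cross_source_attention(driving_feats, source_feats):
    """driving_feats: (B, N_q, C) queries from the driving frame.
    source_feats:  (B, S, N_k, C) features from S source images."""
    B, S, N_k, C = source_feats.shape
    keys = source_feats.reshape(B, S * N_k, C)       # pool across all sources
    attn = torch.softmax(driving_feats @ keys.transpose(1, 2) / C ** 0.5, dim=-1)
    return attn @ keys                               # warped features per query location

out = cross_source_attention(torch.randn(2, 64, 128), torch.randn(2, 3, 64, 128))
print(out.shape)  # torch.Size([2, 64, 128])
```

Because the softmax runs over every location of every source jointly, each output position can draw its features from whichever source image matches best, rather than warping each source independently with an explicit flow field.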